38 research outputs found

    Pricing options and computing implied volatilities using neural networks

    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities, with the aim of accelerating the corresponding numerical methods. Since ANNs are universal function approximators, the method trains an optimized ANN on a data set generated by a sophisticated financial model and then runs the trained ANN as a fast and efficient substitute for the original solver. We test this approach on three different types of solvers: the analytic solution of the Black-Scholes equation, the COS method for the Heston stochastic volatility model, and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.
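
    The offline/online split described above can be illustrated with a small sketch. The snippet below is an assumption about one possible setup, not the paper's exact architecture or library choice: it uses PyTorch, generates training labels from the analytic Black-Scholes formula, and then evaluates the trained network as a fast stand-in for the solver.

```python
# Minimal sketch: train a small MLP on analytic Black-Scholes call prices so
# that, once trained, it can replace the solver with a single forward pass.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm

def bs_call(S, K, tau, r, sigma):
    """Analytic Black-Scholes call price, used here to generate training labels."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 50_000
# Parameter ranges are illustrative assumptions (strike normalised to K = 1).
S = rng.uniform(0.5, 1.5, n)        # spot
tau = rng.uniform(0.05, 2.0, n)     # time to maturity
r = rng.uniform(0.0, 0.1, n)        # risk-free rate
sigma = rng.uniform(0.05, 0.8, n)   # volatility
X = torch.tensor(np.column_stack([S, tau, r, sigma]), dtype=torch.float32)
y = torch.tensor(bs_call(S, 1.0, tau, r, sigma)[:, None], dtype=torch.float32)

surrogate = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):                # offline training phase (full batch)
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(X), y).backward()
    opt.step()

# Online phase: the trained ANN prices a batch of options in one pass.
with torch.no_grad():
    fast_prices = surrogate(X[:10])
```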

    A neural network-based framework for financial model calibration

    A data-driven approach called CaNN (Calibration Neural Network) is proposed to calibrate financial asset price models using an Artificial Neural Network (ANN). Determining optimal values of the model parameters is formulated as training hidden neurons within a machine learning framework, based on available financial option prices. The framework consists of two parts: a forward pass, in which we train the weights of the ANN off-line, valuing options under many different asset model parameter settings; and a backward pass, in which we evaluate the trained ANN solver on-line, aiming to find the weights of the neurons in the input layer. The rapid on-line learning of implied volatility by ANNs, combined with an adapted parallel global optimization method, tackles the computational bottleneck and provides a fast and reliable technique for calibrating model parameters while avoiding, as much as possible, getting stuck in local minima. Numerical experiments confirm that this machine-learning framework can be employed to calibrate the parameters of high-dimensional stochastic volatility models efficiently and accurately. (34 pages, 9 figures, 11 tables)
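
    The backward (calibration) pass can be sketched as follows. This is only an illustration under stated assumptions: `forward_ann` is a toy stand-in for the frozen, offline-trained pricing network, the Heston-style parameter layout and bounds are illustrative, and SciPy's differential_evolution is used in place of the adapted parallel global optimization method mentioned in the abstract.

```python
# Sketch of the backward pass: keep the trained forward pricer fixed and
# search over model parameters so its prices match observed market quotes.
import numpy as np
from scipy.optimize import differential_evolution

def forward_ann(params, strikes, maturities):
    # Toy stand-in so the sketch runs end-to-end; in practice this would be
    # the frozen, offline-trained ANN mapping (model params, contract) -> price.
    kappa, v_bar, gamma, rho, v0 = params
    total_var = v_bar * maturities + (v0 - v_bar) * (1 - np.exp(-kappa * maturities)) / kappa
    return np.maximum(1.0 - strikes * np.exp(-0.5 * total_var * (1.0 + rho * gamma)), 0.0)

def calibration_loss(params, strikes, maturities, market_prices):
    # Mean squared error between model prices and market quotes.
    model_prices = forward_ann(params, strikes, maturities)
    return float(np.mean((model_prices - market_prices) ** 2))

# Illustrative Heston-style bounds: (kappa, v_bar, gamma, rho, v0).
bounds = [(0.1, 5.0), (0.01, 0.5), (0.05, 1.0), (-0.95, 0.0), (0.01, 0.5)]

# Synthetic "market" quotes generated from known parameters for the demo.
strikes = np.linspace(0.8, 1.2, 9)
maturities = np.full_like(strikes, 1.0)
true_params = np.array([1.5, 0.06, 0.3, -0.6, 0.04])
market_prices = forward_ann(true_params, strikes, maturities)

result = differential_evolution(calibration_loss, bounds,
                                args=(strikes, maturities, market_prices),
                                seed=42, polish=True)
calibrated_params, fit_error = result.x, result.fun
```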

    Extracellular Matrix Enhances Therapeutic Effects of Stem Cells in Regenerative Medicine

    Stem cell therapy is a promising option for the regeneration of injured or diseased tissues. However, the extremely low survival and engraftment of transplanted cells and the clearly inadequate recruitment and activation of endogenous resident stem cells are the major challenges for stem cell therapy. Fortunately, recent progress shows that the extracellular matrix (ECM) can not only act as a spatial and mechanical scaffold that enhances cell viability but also provide a supportive niche for engraftment and for accelerating stem cell differentiation. These findings offer a new approach to increasing the efficiency of stem cell therapy and may lead to substantial changes in cell administration. To take a significant step forward in stem cell therapy, we need to know much more about how the ECM affects cell behaviour. In this chapter, we provide an overview of the influence of the ECM on regulating stem cell maintenance and differentiation. Moreover, the use of natural or synthetic ECMs to enhance the supportive microenvironment in stem cell therapy is discussed.

    Machine Learning to Compute Implied Volatility from European/American Options Considering Dividend Yield

    [Abstract] Computing implied volatility from observed option prices is a frequent and challenging task in finance, even more so in the presence of dividends. In this work, we employ a data-driven machine learning approach to determine the Black–Scholes implied volatility, for both European-style and American-style options. The inverse function of the pricing model is approximated by an artificial neural network, which decouples the offline (training) and online (prediction) phases and eliminates the need for an iterative process to solve a minimization problem. Meanwhile, two challenging issues are tackled to improve accuracy and robustness: steep gradients of the volatility with respect to the option price, and irregular early-exercise domains for American options. It is shown that deep neural networks can be used as an efficient numerical technique to compute implied volatility from European/American options. An extended version of this work can be found in
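
    For the European case, the inverse-map idea can be sketched as follows. This is a minimal illustration under assumptions (PyTorch, a small fully connected network, uniformly sampled parameters); the paper's treatment of steep-gradient regions and of American-style exercise is not reproduced here.

```python
# Sketch of the inverse map: sample volatilities, price with Black-Scholes,
# then train an ANN from (price, moneyness, maturity, rate) back to sigma.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm

def bs_call(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

rng = np.random.default_rng(1)
n = 50_000
S, tau = rng.uniform(0.6, 1.4, n), rng.uniform(0.1, 2.0, n)
r, sigma = rng.uniform(0.0, 0.1, n), rng.uniform(0.05, 0.8, n)
price = bs_call(S, 1.0, tau, r, sigma)                 # forward model, K = 1
X = torch.tensor(np.column_stack([price, S, tau, r]), dtype=torch.float32)
y = torch.tensor(sigma[:, None], dtype=torch.float32)

inverse_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, 1))
opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(300):                                   # offline training
    opt.zero_grad()
    nn.functional.mse_loss(inverse_net(X), y).backward()
    opt.step()

# Online phase: implied volatilities for new quotes, no iterative root-finding.
with torch.no_grad():
    implied_vols = inverse_net(X[:5])
```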

    On a Neural Network to Extract Implied Information from American Options

    [Abstract] Extracting implied information, such as volatility and the dividend yield, from observed option prices is a challenging task when dealing with American options, because of the complex-shaped early-exercise regions and the computational cost of solving the corresponding mathematical problem repeatedly. We employ a data-driven machine learning approach to estimate the Black-Scholes implied volatility and the dividend yield for American options in a fast and robust way. To determine the implied volatility, the inverse function is approximated by an artificial neural network on the effective computational domain of interest, which decouples the offline (training) and online (prediction) stages and thus eliminates the need for an iterative process. In the case of an unknown dividend yield, we formulate the inverse problem as a calibration problem and determine the implied volatility and the dividend yield simultaneously. For this, a generic and robust calibration framework, the Calibration Neural Network (CaNN), is introduced to estimate multiple parameters. It is shown that machine learning can be used as an efficient numerical technique to extract implied information from American options, particularly when considering multiple early-exercise regions due to negative interest rates. We would also like to thank Dr. ir. Lech Grzelak for valuable suggestions, as well as Dr. Damien Ackerer for fruitful discussions. The author S. Liu would like to thank the China Scholarship Council (CSC) for the financial support.
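
    A network of this kind has to be trained on labelled American-option data. One way to generate such data is sketched below with a standard Cox-Ross-Rubinstein binomial tree including a continuous dividend yield q; this is an assumption for illustration, not the pricing engine used in the paper.

```python
# Generate (sigma, q, price) samples for American puts with a CRR binomial
# tree; such samples can serve as training data for an inverse/calibration ANN.
import numpy as np

def american_put_crr(S0, K, tau, r, q, sigma, steps=200):
    """CRR binomial price of an American put with continuous dividend yield q."""
    dt = tau / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp((r - q) * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal stock prices and payoffs (j = number of up-moves).
    j = np.arange(steps + 1)
    S = S0 * u**j * d**(steps - j)
    V = np.maximum(K - S, 0.0)
    # Backward induction with an early-exercise check at every node.
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u**j * d**(i - j)
        V = np.maximum(K - S, disc * (p * V[1:] + (1 - p) * V[:-1]))
    return V[0]

# Sample (sigma, q) pairs and record the resulting prices as training labels.
rng = np.random.default_rng(2)
samples = []
for _ in range(1000):
    sigma, q = rng.uniform(0.05, 0.8), rng.uniform(0.0, 0.08)
    price = american_put_crr(S0=1.0, K=1.0, tau=1.0, r=0.03, q=q, sigma=sigma)
    samples.append((sigma, q, price))
```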

    A Robust Semantics-based Watermark for Large Language Model against Paraphrasing

    Large language models (LLMs) have shown great ability in various natural language tasks. However, there are concerns that LLMs may be used improperly or even illegally. To prevent the malicious usage of LLMs, detecting LLM-generated text becomes crucial in the deployment of LLM applications. Watermarking is an effective strategy for detecting LLM-generated content: a pre-defined secret watermark is encoded into the generated text to facilitate the detection process. However, the majority of existing watermarking methods partition the vocabulary using simple hashes of the preceding tokens. Such watermarks can easily be removed by paraphrasing, and the detection effectiveness is correspondingly greatly compromised. Thus, to enhance robustness against paraphrasing, we propose SemaMark, a semantics-based watermark framework. It leverages sentence semantics as an alternative to simple token hashes, since paraphrasing will likely preserve the semantic meaning of the sentences. Comprehensive experiments are conducted to demonstrate the effectiveness and robustness of SemaMark under different paraphrases.
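
    The core idea of seeding the vocabulary partition from sentence semantics rather than from a token hash can be illustrated with the sketch below. This is not SemaMark's actual algorithm: the choice of embedding model, the coarse quantisation of the embedding, and the green-list logit boosting follow a generic green/red-list watermark scheme and are assumptions made for illustration.

```python
# Illustration: derive the green-list seed from a quantised sentence embedding
# of the preceding context, so meaning-preserving paraphrases tend to map to
# the same partition, then boost the logits of green tokens before sampling.
import hashlib
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def green_list(context: str, vocab_size: int, gamma: float = 0.5,
               quant: float = 0.2) -> np.ndarray:
    """Return indices of the 'green' tokens for the given context."""
    emb = embedder.encode(context, normalize_embeddings=True)
    # Coarsely quantise a slice of the embedding so small paraphrase-induced
    # shifts still map to the same seed.
    code = np.round(emb[:16] / quant).astype(int).tobytes()
    seed = int.from_bytes(hashlib.sha256(code).digest()[:8], "little")
    rng = np.random.default_rng(seed)
    perm = rng.permutation(vocab_size)
    return perm[: int(gamma * vocab_size)]

def watermark_logits(logits: np.ndarray, context: str, delta: float = 2.0) -> np.ndarray:
    """Boost the logits of green tokens before sampling the next token."""
    boosted = logits.copy()
    boosted[green_list(context, len(logits))] += delta
    return boosted
```

    A detector using the same embedder and quantisation can recompute the green list for each position and count how many observed tokens fall inside it.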

    MILL: Mutual Verification with Large Language Models for Zero-Shot Query Expansion

    Query expansion is a commonly used technique in many search systems to better represent users' information needs with additional query terms. Existing studies for this task usually expand a query with retrieved or generated contextual documents. However, both types of methods have clear limitations. For retrieval-based methods, the documents retrieved with the original query might not be accurate enough to reveal the search intent, especially when the query is brief or ambiguous. For generation-based methods, existing models can hardly be trained or aligned on a particular corpus, due to the lack of corpus-specific labeled data. In this paper, we propose a novel Large Language Model (LLM) based mutual verification framework for query expansion, which alleviates the aforementioned limitations. Specifically, we first design a query-query-document generation pipeline, which can effectively leverage the contextual knowledge encoded in LLMs to generate sub-queries and corresponding documents from multiple perspectives. Next, we employ a mutual verification method for both generated and retrieved contextual documents, where 1) retrieved documents are filtered with the external contextual knowledge in the generated documents, and 2) generated documents are filtered with the corpus-specific knowledge in the retrieved documents. Overall, the proposed method allows retrieved and generated documents to complement each other and finalize a better query expansion. We conduct extensive experiments on three information retrieval datasets, i.e., TREC-DL-2020, TREC-COVID, and MSMARCO. The results demonstrate that our method significantly outperforms other baselines.
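
    A stripped-down version of the mutual-verification step might look like the sketch below. The filtering rule used here (mean embedding similarity to the other document set) is a simplification, and `llm_generate`, `retrieve`, and `embed` are assumed user-supplied callables (e.g. an LLM API wrapper, a BM25 index, and a sentence encoder), not components defined by the paper.

```python
# Sketch: generated and retrieved documents verify each other, and the
# survivors are appended to the original query as expansion context.
from typing import Callable, List
import numpy as np

def mutual_verify(query: str,
                  llm_generate: Callable[[str], List[str]],
                  retrieve: Callable[[str], List[str]],
                  embed: Callable[[str], np.ndarray],
                  keep: int = 3) -> str:
    generated = llm_generate(query)     # sub-queries and pseudo-documents
    retrieved = retrieve(query)         # corpus documents for the raw query

    def verified(cands: List[str], refs: List[str]) -> List[str]:
        # Keep the candidates that the other set "verifies", i.e. those with
        # the highest mean cosine similarity to the reference documents.
        ref_vecs = np.stack([embed(r) for r in refs])
        ref_vecs = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
        scores = []
        for c in cands:
            v = embed(c)
            v = v / np.linalg.norm(v)
            scores.append(float((ref_vecs @ v).mean()))
        order = np.argsort(scores)[::-1][:keep]
        return [cands[i] for i in order]

    kept_generated = verified(generated, retrieved)
    kept_retrieved = verified(retrieved, generated)
    # Expanded query: original terms plus the mutually verified documents.
    return " ".join([query] + kept_generated + kept_retrieved)
```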

    I^3 Retriever: Incorporating Implicit Interaction in Pre-trained Language Models for Passage Retrieval

    Passage retrieval is a fundamental task in many information systems, such as web search and question answering, where both efficiency and effectiveness are critical concerns. In recent years, neural retrievers based on pre-trained language models (PLMs), such as dual-encoders, have achieved huge success. Yet, studies have found that the performance of dual-encoders is often limited because the interaction information between queries and candidate passages is neglected. Therefore, various interaction paradigms have been proposed to improve the performance of vanilla dual-encoders. In particular, recent state-of-the-art methods often introduce late interaction during the model inference process. However, such late-interaction based methods usually incur extensive computation and storage costs on large corpora. Despite their effectiveness, concerns about efficiency and space footprint remain important factors that limit the application of interaction-based neural retrieval models. To tackle this issue, we incorporate implicit interaction into dual-encoders and propose the I^3 retriever. In particular, our implicit interaction paradigm leverages generated pseudo-queries to simulate query-passage interaction and is jointly optimized with the query and passage encoders in an end-to-end manner. It can be fully pre-computed and cached, and its inference process only involves a simple dot product of the query vector and the passage vector, which makes it as efficient as a vanilla dual-encoder. We conduct comprehensive experiments on the MSMARCO and TREC 2019 Deep Learning datasets, demonstrating the I^3 retriever's superiority in terms of both effectiveness and efficiency. Moreover, the proposed implicit interaction is compatible with specialized pre-training and knowledge distillation for passage retrieval, which brings new state-of-the-art performance.
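
    The efficiency argument, that the pseudo-query interaction can live entirely on the passage side and be cached, can be sketched as below. The attention-pooling module, the dimensions, and the random stand-in vectors are assumptions for illustration rather than the paper's exact architecture; the point is that online scoring remains a single dot product.

```python
# Toy sketch: fold pseudo-query interaction into a cacheable passage vector
# offline, so the online query-passage score is just a dot product.
import torch
import torch.nn as nn

class ImplicitInteractionEncoder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, passage_vec: torch.Tensor,
                pseudo_query_vecs: torch.Tensor) -> torch.Tensor:
        # passage_vec: (B, dim); pseudo_query_vecs: (B, n_pseudo, dim).
        # Let each passage attend to its own generated pseudo-queries, then
        # fuse the result back into a single cacheable vector.
        q = passage_vec.unsqueeze(1)                          # (B, 1, dim)
        interacted, _ = self.attn(q, pseudo_query_vecs, pseudo_query_vecs)
        return self.proj(passage_vec + interacted.squeeze(1))

# Offline: encode passages and pseudo-queries once, cache the fused vectors.
B, n_pseudo, dim = 4, 3, 128
encoder = ImplicitInteractionEncoder(dim)
passage_vecs = torch.randn(B, dim)             # stand-ins for PLM passage vectors
pseudo_query_vecs = torch.randn(B, n_pseudo, dim)
with torch.no_grad():
    cached_passage_index = encoder(passage_vecs, pseudo_query_vecs)

# Online: score a query against the cache with a plain dot product.
query_vec = torch.randn(1, dim)
scores = query_vec @ cached_passage_index.T                   # (1, B)
```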